Biologists discover gene that may determine 'good' and 'bad' dads

Popular Science

Social cues and surprising genetics may affect mammal fathers. Most mammals grow up in single-parent homes. It's estimated that over 95 percent of the planet's nearly 6,000 known mammalian species rely almost exclusively on mothers to nurture and raise their offspring. But even when dads stick around, it's not always smooth sailing.



Trial matching: capturing variability with data-constrained spiking neural networks

Neural Information Processing Systems

Simultaneous behavioral and electrophysiological recordings call for new methods to reveal the interactions between neural activity and behavior. A milestone would be an interpretable model of the co-variability of spiking activity and behavior across trials. Here, we model a mouse cortical sensory-motor pathway in a tactile detection task reported by licking with a large recurrent spiking neural network (RSNN), fitted to the recordings via gradient-based optimization. We focus specifically on the difficulty of matching the trial-to-trial variability in the data. Our solution relies on optimal transport to define a distance between the distributions of generated and recorded trials. The technique is applied to artificial data and neural recordings covering six cortical areas. We find that the resulting RSNN can generate realistic cortical activity and predict jaw movements across the main modes of trial-to-trial variability. Our analysis also identifies an unexpected mode of variability in the data corresponding to task-irrelevant movements of the mouse.
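The core idea of using optimal transport as a distance between trial distributions can be sketched in one dimension, where the optimal plan simply pairs sorted samples. This is an illustrative stand-in, not the paper's actual trial-matching loss (which operates on higher-dimensional trial descriptors); the function name and the toy per-trial statistics below are hypothetical.

```python
# Minimal sketch: Wasserstein-1 (optimal transport) distance between two
# empirical distributions of per-trial summary statistics, e.g. spike counts.
# In 1-D, optimal transport pairs sorted samples, so the distance reduces to
# the mean absolute difference of the sorted values.

def wasserstein_1d(xs, ys):
    """Wasserstein-1 distance between two equal-size empirical samples."""
    assert len(xs) == len(ys), "equal trial counts assumed for simplicity"
    xs_sorted, ys_sorted = sorted(xs), sorted(ys)
    return sum(abs(a - b) for a, b in zip(xs_sorted, ys_sorted)) / len(xs)

# Toy example: hypothetical recorded vs. generated per-trial firing rates.
recorded = [4.0, 6.0, 5.0, 7.0]
generated = [5.0, 4.5, 6.5, 7.5]
print(wasserstein_1d(recorded, generated))  # → 0.375
```

Minimizing such a distance pulls the whole distribution of generated trials toward the recorded one, rather than matching trials one-to-one in a fixed order.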



A Unified, Scalable Framework for Neural Population Decoding

Neural Information Processing Systems

Unlike the case for text--wherein every document written in a given language shares a basic lexicon for tokenization--there is no one-to-one correspondence between neurons in different individuals.


Supplementary Materials

Neural Information Processing Systems

Finally, the data was subsampled by a factor of 2.

Data augmentation
TX features were augmented by adding two types of artificial noise. Each session day has its own affine transform layer.

RNN training hyperparameters
The hyperparameters for RNN training are listed in Table 1.

Table 1: RNN training hyperparameters

Description                          Hyperparameter
Learning rate                        0.01
Batch size                           48
Number of training batches           20000
Number of hidden units in the GRU    512
Number of GRU layers                 2
Dropout rate in the GRU              0.4
Optimizer                            Adam
Learning rate decay schedule         Linear
L2 weight regularization             1e-5
Maximum gradient norm for clipping   10

1.2 Language model training details
Out-of-vocabulary words were mapped to a special token. In our case, T contains all 26 English letters, 5 punctuation marks, and the CTC blank symbol.
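The hyperparameters in Table 1 can be collected into a single config, and the "Linear" learning-rate decay entry interpreted as a ramp from the base rate down to zero over the training batches. This is a hedged sketch: the dictionary keys, the `linear_lr` helper, and the exact decay formula are assumptions for illustration, since the supplementary text does not spell out the schedule.

```python
# Sketch of the Table 1 hyperparameters as a config dict (names assumed),
# plus one possible reading of the "Linear" learning-rate decay schedule.
HPARAMS = {
    "learning_rate": 0.01,
    "batch_size": 48,
    "num_training_batches": 20000,
    "gru_hidden_units": 512,
    "gru_layers": 2,
    "gru_dropout": 0.4,
    "optimizer": "Adam",
    "lr_decay": "linear",
    "l2_weight_reg": 1e-5,
    "max_grad_norm": 10,
}

def linear_lr(step, base_lr=HPARAMS["learning_rate"],
              total_steps=HPARAMS["num_training_batches"]):
    """Linearly decay the learning rate from base_lr to 0 over training."""
    return base_lr * max(0.0, 1.0 - step / total_steps)

print(linear_lr(0))       # base rate at the first batch
print(linear_lr(10000))   # half the base rate at the halfway point
```

In a typical PyTorch setup, the same schedule could be driven by `torch.optim.lr_scheduler.LambdaLR`, with gradient clipping applied at the listed maximum norm of 10.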